01. Introducing Jay Alammar

Hi! Arpan again :)

Say hello to Jay Alammar! Jay has done some great work creating interactive explorations of neural networks. If you haven't already, make sure you check out his blog.

Jay will be teaching you about a particular RNN architecture called "sequence to sequence." With this architecture, you feed in a sequence of data and the network outputs another sequence. It's typically used for problems such as machine translation, where you'd feed in a sentence in English and get out a sentence in Arabic.
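To make that concrete, here is a minimal sketch of an encoder-decoder (sequence-to-sequence) model using TensorFlow's Keras API. The vocabulary sizes and layer dimensions are placeholder values for illustration, not values from the lesson, and the lesson's own code may be structured differently.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical sizes chosen only for illustration.
src_vocab, tgt_vocab = 1000, 1000
embed_dim, hidden_dim = 64, 128

# Encoder: reads the source sequence and summarizes it in its final LSTM state.
enc_inputs = layers.Input(shape=(None,), name="encoder_tokens")
enc_embed = layers.Embedding(src_vocab, embed_dim)(enc_inputs)
_, state_h, state_c = layers.LSTM(hidden_dim, return_state=True)(enc_embed)

# Decoder: generates the target sequence, initialized with the encoder's state.
dec_inputs = layers.Input(shape=(None,), name="decoder_tokens")
dec_embed = layers.Embedding(tgt_vocab, embed_dim)(dec_inputs)
dec_outputs, _, _ = layers.LSTM(
    hidden_dim, return_sequences=True, return_state=True
)(dec_embed, initial_state=[state_h, state_c])

# Predict a distribution over the target vocabulary at each time step.
predictions = layers.Dense(tgt_vocab, activation="softmax")(dec_outputs)

model = Model([enc_inputs, dec_inputs], predictions)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

During training, the decoder is usually fed the target sentence shifted by one token (teacher forcing); at inference time, it generates one token at a time, feeding each prediction back in as the next input.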

Hopefully, this conceptual overview of sequence-to-sequence learning will help you build solutions for problems like machine translation. As you go through the lesson, you can practice these concepts using code available here.

Note: All the code in this lesson is in TensorFlow.